
Conversation

@iblancasa
Contributor

Link to tracking issue

Fixes #44472

Testing

Added unit tests and performed manual testing.

nAddr := strings.Split(service, ".")
name, namespace := nAddr[0], "default"
if len(nAddr) > 1 {
	namespace = nAddr[1]
}
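To make the parsing logic above concrete, here is a minimal self-contained sketch of how a `<svcName>.<namespace>` service string can be split, defaulting the namespace to `default` when none is given. The function name `parseServiceName` and the example inputs are illustrative, not taken from the PR:

```go
package main

import (
	"fmt"
	"strings"
)

// parseServiceName splits a headless-service string of the form
// "<svcName>" or "<svcName>.<namespace>" into its parts, falling
// back to the "default" namespace when none is specified.
func parseServiceName(service string) (name, namespace string) {
	nAddr := strings.Split(service, ".")
	name, namespace = nAddr[0], "default"
	if len(nAddr) > 1 {
		namespace = nAddr[1]
	}
	return name, namespace
}

func main() {
	// Bare service name: namespace falls back to "default".
	fmt.Println(parseServiceName("collector-backend")) // collector-backend default
	// FQDN-style name: namespace is taken from the second segment.
	fmt.Println(parseServiceName("collector-backend.observability")) // collector-backend observability
}
```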
Contributor

Thanks for the fix. According to the changes you made, the loadbalancing exporter should work correctly (even without your fix) when using the standard <svcName>.<namespace> domain-name format for the headless service. However, I am still experiencing the same issue even when using collector-backend.default in the config mentioned in the issue. Am I missing something here? Thanks!

Contributor Author


Let me retest with that and see if I missed something during the fix.

Contributor Author


I just reran with the provided config against a local kind cluster, using a collector build that includes the FQDN parsing fix. With telemetrygen driving 50 traces through a port-forward, both backend pods received traffic, and kubectl logs deploy/lb-collector never produced the couldn’t find the exporter for the endpoint "" error. So I think <svc>.<namespace> now works as expected.

Note that the frontend collector’s service account must be allowed to list and watch EndpointSlices in the target namespace. Without that RBAC role binding, the informer never populates and the ring stays empty, which yields the same error message even with a correct service string.
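For reference, a minimal Role/RoleBinding sketch granting the access described above. All names, the namespace, and the service account are illustrative assumptions, not taken from this PR:

```yaml
# Illustrative only: metadata names, namespace, and service account are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lb-collector-endpointslices
  namespace: default
rules:
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lb-collector-endpointslices
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: lb-collector-endpointslices
subjects:
  - kind: ServiceAccount
    name: lb-collector
    namespace: default
```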

Signed-off-by: Israel Blancas <[email protected]>
@atoulme
Contributor

atoulme commented Nov 27, 2025

@rlankfo please review as codeowner



Development

Successfully merging this pull request may close these issues.

Backend creation not succeeding with k8s resolver
